Prerequisites:
  • A valid Qubrid AI account, logged in to the platform
  • Sufficient credits in your account to provision and run GPU instances
The AI/ML Templates catalog provides preconfigured environments that can be quickly deployed on high-performance GPU virtual machines. These templates include popular frameworks, open-source large language models (LLMs), orchestration tools, and workflow engines, saving you the effort of manual setup.

How It Works

1. Head over to the AI / ML tab in the left menu

   This lists all the templates available for quick use.

2. Select a template from the catalog

   Choose an AI/ML template that matches your use case. Templates include OS images, frameworks, workflow tools, and preconfigured LLMs.

3. Select a GPU instance

   A dropdown appears; choose the GPU instance on which you want to deploy the template.

4. Review the instance

   Here you can see the RAM, vCPU, system storage, OS, Python version, CUDA version, and maximum GPUs allocated.

5. Customise the instance

   Adjust any settings you want to change, select a commitment period, and click the Launch button.
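Once the instance is running, you can connect to it and confirm that the environment matches the specs shown in step 4. The sketch below is a minimal, hedged example: it assumes a standard Linux instance with the usual NVIDIA tooling on PATH, and simply reports what it finds rather than failing if something is missing.

```python
import platform
import shutil
import subprocess


def instance_summary():
    """Collect a quick summary of the launched instance's environment.

    Returns a dict with the Python version, whether the CUDA compiler
    (nvcc) is on PATH, and the raw nvidia-smi output if a GPU driver
    is installed (None otherwise).
    """
    summary = {
        "python_version": platform.python_version(),
        "cuda_toolkit_found": shutil.which("nvcc") is not None,
        "gpu_info": None,
    }
    if shutil.which("nvidia-smi"):
        summary["gpu_info"] = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"],
            capture_output=True,
            text=True,
        ).stdout
    return summary


print(instance_summary())
```

On an Ubuntu 22.04 template, for example, you would expect `python_version` to start with `3.10`; on Ubuntu 24.04, with `3.12`.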

Available Templates

Base Operating Systems

  • Ubuntu 22.04 - Standard Linux environment with Python 3.10.12
  • Ubuntu 24.04 - Updated Linux environment with Python 3.12.3

Frameworks & Libraries

  • TensorFlow 2.17.1 - Preinstalled ML framework for training and inference
  • PyTorch 2.4.0 - GPU-accelerated deep learning framework with NumPy/SciPy support
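To verify that a framework template is working end to end, you can check that the preinstalled framework imports cleanly and can see the GPU. This sketch guards each import so it also runs on images where only one of the two frameworks is present; the APIs used (`torch.cuda.is_available`, `tf.config.list_physical_devices`) are the standard ones from each library.

```python
import importlib.util


def framework_report():
    """Report which ML frameworks are importable and whether each one
    can see a GPU. Frameworks that are not installed are skipped."""
    report = {}
    if importlib.util.find_spec("torch") is not None:
        import torch
        report["torch"] = {
            "version": torch.__version__,
            "cuda_available": torch.cuda.is_available(),
        }
    if importlib.util.find_spec("tensorflow") is not None:
        import tensorflow as tf
        report["tensorflow"] = {
            "version": tf.__version__,
            "gpus": len(tf.config.list_physical_devices("GPU")),
        }
    return report


print(framework_report())
```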

Workflow & Automation Tools

  • n8n - Low-code/no-code automation platform with 400+ integrations and built-in AI nodes
  • Langflow - Drag-and-drop interface for designing, testing, and deploying AI pipelines

Model Serving Interfaces

  • ComfyUI v0.3.50 / v0.3.52 - Node-based generative AI interface for inference and workflow chaining
  • GPT OSS (20B) [Open WebUI] - Open-source variant for experimenting with GPT-style models via Ollama
  • Qwen 3 (Latest) [Open WebUI] - Large language model interface for managing Qwen models
  • Gemma 3 (27B) [Open WebUI] - Google’s 27B parameter model optimized for efficiency and accuracy
  • DeepSeek R1 (671B) [Open WebUI] - One of the largest-scale open LLMs, optimized for reasoning-heavy workloads
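Templates that serve models via Ollama also expose Ollama's REST API alongside the Open WebUI browser interface, so you can query the model programmatically. The sketch below assumes Ollama's default port (11434) on the instance; the model tag (`gemma3:27b` here) is illustrative, and the actual tag on your instance depends on which template you deployed.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port (assumed)


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def generate(model: str, prompt: str) -> str:
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]


# Example (run on an instance with the model already pulled):
# print(generate("gemma3:27b", "Summarize CUDA in one sentence."))
```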

Notes

  • All templates are deployed on top of Ubuntu Linux with the specified Python runtime
  • Templates that include Open WebUI provide a browser-based interface for model interaction
  • Templates can be combined with other GPU services such as storage, agents, and RAG workflows

Key Benefits

  • Improved Performance: Gain higher accuracy and better predictive power by customizing models to your data
  • Reduced Training Time: Efficient hyperparameter optimization techniques save compute resources and time
  • Flexible Approaches: Supports fine-tuning pre-trained models or training from scratch
  • Automated Hyperparameter Search: Integration with tools that support grid search, random search, and advanced optimization (e.g., Bayesian optimization, Hyperopt)
  • Scalable & Repeatable: Easily reproduce tuning experiments and scale across GPU instances

We are constantly adding new and updated templates. Keep an eye on the platform for updates.